Creators/Authors contains: "Piyawattanametha, Wibool"


  1. Imaging of surface-enhanced Raman scattering (SERS) nanoparticles (NPs) has been intensively studied for cancer detection owing to its high sensitivity, its robustness even at low signal-to-noise ratios, and its multiplexed detection capability. Furthermore, conjugating SERS NPs with various biomarkers is straightforward, and numerous studies have successfully applied them to cancer detection and diagnosis. However, Raman spectroscopy provides only spectral data from an imaging area, without co-registered anatomical context, which is neither practical nor suitable for clinical applications. Here, we propose a custom-built Raman spectrometer with computer-vision-based positional tracking and deep-learning (DL) monocular depth estimation for the visualization of SERS NPs in 2D and 3D, respectively. In addition, the SERS NPs used in this study (hyaluronic acid-conjugated SERS NPs) showed clear tumor-targeting capability (targeting CD44, typically overexpressed in tumors) in an ex vivo experiment and by immunohistochemistry. The combination of Raman spectroscopy, image processing, and SERS molecular imaging therefore offers robust and feasible potential for clinical applications.
    Free, publicly-accessible full text available January 1, 2026
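The 2D visualization described above amounts to attaching a spectral readout to each tracked probe position. As a rough sketch of that idea (all values, the wavenumber range, and the characteristic peak window below are illustrative assumptions, not details from the paper), one can integrate a SERS peak from each spectrum and accumulate it onto a map indexed by the tracked coordinates:

```python
import numpy as np

# Hypothetical illustration: build a 2D SERS NP intensity map from tracked
# probe positions and their Raman spectra. The wavenumber axis, peak window,
# and grid size are assumed values, not from the paper.
wavenumbers = np.linspace(400, 1800, 700)                    # cm^-1, assumed range
positions = np.array([[0.2, 0.3], [0.5, 0.5], [0.8, 0.7]])   # tracked (x, y), normalized
spectra = np.random.default_rng(0).random((3, 700))          # stand-in spectra

def peak_intensity(spectrum, center=1170.0, half_width=10.0):
    """Integrate intensity in a window around a characteristic SERS peak."""
    mask = np.abs(wavenumbers - center) <= half_width
    return spectrum[mask].sum()

grid = np.zeros((10, 10))                # coarse 2D intensity map
for (x, y), spec in zip(positions, spectra):
    i, j = int(y * 9), int(x * 9)        # nearest grid cell for this position
    grid[i, j] += peak_intensity(spec)
```

The paper's 3D variant would additionally assign each measurement a depth from the DL monocular depth estimator; this sketch covers only the 2D accumulation step.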
  2. Traditionally, a high-performance microscope with a large numerical aperture is required to acquire high-resolution images. However, such images are typically enormous, making them inconvenient to manage, transfer across a computer network, or store in limited computer storage. As a result, image compression is commonly used to reduce image size, at the cost of resolution. Here, we demonstrate custom convolutional neural networks (CNNs) both for super-resolution enhancement of low-resolution images and for characterization of cells and nuclei in hematoxylin and eosin (H&E)-stained breast cancer histopathological images, using a combination of generator and discriminator networks — a super-resolution generative adversarial network based on aggregated residual transformations (SRGAN-ResNeXt) — to facilitate cancer diagnosis in low-resource settings. The results show a large improvement in image quality: the peak signal-to-noise ratio and structural similarity of our network's outputs exceed 30 dB and 0.93, respectively, outperforming both bicubic interpolation and the well-known SRGAN deep-learning method. In addition, another custom CNN performs image segmentation on the high-resolution breast cancer images generated by our model, achieving an average intersection over union of 0.869 and an average Dice similarity coefficient of 0.893 on the H&E segmentation task. Finally, we propose jointly trained SRGAN-ResNeXt and Inception U-Net models, which use the weights of the individually trained SRGAN-ResNeXt and Inception U-Net models as pre-trained weights for transfer learning. The jointly trained model's results are progressively improved and promising.
We anticipate these custom CNNs can help resolve the inaccessibility of advanced microscopes or whole-slide imaging (WSI) systems by enabling high-resolution images to be derived from low-performance microscopes in resource-constrained settings.
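The quality figures quoted above (PSNR over 30 dB, Dice of 0.893) follow standard definitions. The snippet below shows those two metrics computed from scratch on toy arrays; it illustrates the definitions only and is not the authors' evaluation code:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB (standard definition)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def dice(mask_a, mask_b):
    """Dice similarity coefficient for binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy check: one corrupted pixel out of 16, so MSE = 16**2 / 16 = 16
ref = np.zeros((4, 4))
test = ref.copy()
test[0, 0] = 16.0
print(round(psnr(ref, test), 2))   # 10 * log10(255**2 / 16) ≈ 36.09
```

Intersection over union (IoU) is computed the same way as Dice but with `inter / (a.sum() + b.sum() - inter)`; the two are monotonically related.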
  3. Metasurfaces have been studied and widely applied to optical systems. A metasurface-based flat lens (metalens) holds promise in wave-front engineering for multiple applications. The metalens has become a breakthrough technology for miniaturized optical system development due to its outstanding characteristics, such as ultrathinness and cost-effectiveness. Compared to conventional macro- or meso-scale optics manufacturing methods, the micro-machining process for metalenses is relatively straightforward and more suitable for mass production. Owing to their remarkable abilities and superior optical performance, metalenses operating in refractive or diffractive mode could potentially replace traditional optics. In this review, we give a brief overview of the most recent studies on metalenses and their applications, with a specific focus on miniaturized optical imaging and sensing systems. We discuss approaches for overcoming technical challenges in the bio-optics field, including achieving a large field of view (FOV), correcting chromatic aberration, and attaining high-resolution imaging.
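The wave-front engineering mentioned in this abstract is commonly described by the ideal converging-metalens phase profile, phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2)), which delays rays near the center so that all paths arrive in phase at the focal point. The sketch below evaluates that textbook profile for assumed design values (wavelength, focal length, and aperture are illustrative, not taken from the review):

```python
import numpy as np

# Ideal hyperbolic phase profile of a converging metalens (textbook form).
# Design wavelength, focal length, and aperture radius are assumed values.
lam = 633e-9                       # design wavelength (m), assumed
f = 1e-3                           # focal length (m), assumed
r = np.linspace(0, 250e-6, 501)    # radial coordinate across the lens (m)

# Phase needed at radius r so every ray reaches the focus in phase
phi = (2 * np.pi / lam) * (f - np.sqrt(r**2 + f**2))

# Each nanostructure only needs to impart the phase modulo 2*pi,
# which is what makes the lens flat instead of bulk-refractive
phi_wrapped = np.mod(phi, 2 * np.pi)
```

The wrapped profile is what a designer maps onto a library of nanostructure geometries; chromatic aberration arises because this profile is exact only at the single design wavelength `lam`.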
  4. Growing demand for affordable, portable, and reliable optical microendoscopic imaging devices is attracting research institutes and industry to find new manufacturing methods. However, the integration of microscopic components into these subsystems is one of today's challenges in manufacturing and packaging: with this kind of miniaturization, more and more functional parts must be accommodated in ever smaller spaces. Addressing this challenge with microelectromechanical systems (MEMS) fabrication technology has therefore opened promising opportunities for miniaturizing a wide variety of novel optical microendoscopes. MEMS fabrication enables high-precision batch fabrication and the integration of a wide variety of optical functionalities into optical components. As a result, MEMS technology has made advanced optical microendoscopy more accessible, providing high-resolution, high-performance imaging comparable to traditional table-top microscopy. In this review, the latest advances in MEMS actuators for optical microendoscopy are discussed in detail.
  5. Magnetic particle imaging (MPI) is an emerging noninvasive molecular imaging modality with high sensitivity and specificity, exceptional linear quantitative ability, and potential for successful application in clinical settings. Computed tomography (CT) is typically combined with MPI images to obtain more anatomical information. Herein, a deep learning‐based approach for MPI‐CT image segmentation is presented. The dataset used to train the proposed deep learning model is obtained from a transgenic mouse model of breast cancer following administration of indocyanine green (ICG)‐conjugated superparamagnetic iron oxide nanoworms (NWs‐ICG) as the tracer. The NWs‐ICG particles progressively accumulate in tumors due to the enhanced permeability and retention (EPR) effect. The proposed deep learning model exploits the advantages of the multihead attention mechanism and the U‐Net architecture to segment the MPI‐CT images, with excellent results. In addition, the model is characterized with different numbers of attention heads to explore the optimal number for our custom MPI‐CT dataset.
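The multihead attention mechanism this abstract builds on is composed of scaled dot-product attention blocks. A minimal single-head sketch of that building block is shown below (shapes and values are illustrative; this is not the paper's MPI-CT model):

```python
import numpy as np

# Minimal sketch of scaled dot-product attention, the building block of the
# multihead mechanism. A multihead layer runs several such attentions on
# learned linear projections of Q, K, V and concatenates the results.
def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-1, -2) / np.sqrt(d_k)   # similarity of queries to keys
    return softmax(scores) @ v                       # weighted average of values

rng = np.random.default_rng(1)
q = rng.standard_normal((4, 8))   # 4 query tokens, feature dim 8
k = rng.standard_normal((6, 8))   # 6 key tokens
v = rng.standard_normal((6, 8))   # 6 value tokens
out = attention(q, k, v)          # shape (4, 8)
```

In an attention U-Net variant like the one described, the "tokens" are spatial feature-map positions, letting each location aggregate context from the whole image rather than only its convolutional neighborhood.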
  6. Multispectral optoacoustic tomography (MSOT) is a beneficial technique for diagnosing and analyzing biological samples, since it provides meticulous detail in anatomy and physiology. However, acquiring volumetric MSOT with high through‐plane resolution is time‐consuming. Here, we propose a deep learning model based on hybrid recurrent and convolutional neural networks to generate sequential cross‐sectional images for an MSOT system. This system provides three modalities (MSOT, ultrasound, and optoacoustic imaging of a specific exogenous contrast agent) in a single scan. This study used ICG‐conjugated nanoworm (NWs‐ICG) particles as the contrast agent. Instead of acquiring seven images with a step size of 0.1 mm, we can acquire two images with a step size of 0.6 mm as input to the proposed deep learning model, which then generates the five intermediate images at 0.1 mm spacing between the two inputs, reducing acquisition time by approximately 71%.
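The roughly 71% figure follows directly from the slice counts in the abstract: only two of every seven cross-sections are physically scanned, and the model synthesizes the remaining five. As simple arithmetic:

```python
# Per 0.6 mm segment: 7 slices are needed at 0.1 mm spacing,
# but only the 2 endpoint slices are physically acquired;
# the deep learning model generates the 5 in between.
acquired = 2
needed = 7
reduction = 1 - acquired / needed   # fraction of scan time saved
print(f"{reduction:.1%}")           # prints 71.4%
```

This assumes acquisition time scales linearly with the number of scanned slices, which is the implicit premise of the abstract's estimate.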